
Our robot’s autonomous mode relies on precise distance measurements provided by our
encoders. If an encoder makes an error of 0.01 inches per inch, the error may not be immediately
noticeable (overflow can produce errors even larger than this, since entire bytes of velocity information
can be lost). However, if the robot moves 100 feet over the course of the autonomous mode, its encoder
position estimate would be a full foot off (0.01 inches of error per inch × 1,200 inches traveled = 12
inches). If the robot then tried to pick up a pixel, it would be unable to do so. It is therefore essential for
our encoders to be as accurate as possible. We have gathered specifications for the wheels and encoder
pods installed on our robot, and have tested movement repeatedly in the real world. Our autonomous
mode has a certain amount of error correction built in (though, at the time of this writing, the
autonomous mode is quite sparse), but it focuses on specific movements made possible by knowledge of
the robot’s location, rather than on a detection method such as TensorFlow Object Detection. Our
manual control modes also rely on distance measurement, but in an incremental way.
Our current forward movement methods rely on setting motor power, but we decided to create a new
recursive function that runs while the robot is active to move it forward. While the OpMode is running
(while (opModeIsActive())), we call the driveStraight method and other functions based on the current
tilt percentage of the controller joysticks. These methods call the underlying Chassis.driveStraight with
a distance parameter of one inch - an amount small enough to allow fine-grained movement, but large
enough to avoid high overhead. The function calls itself when the driveStraight method finishes and
checks whether the gamepad is still tilted - if so, it drives the robot again.
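A simplified sketch of this loop is shown below. The Chassis class and the exact
driveStraight(inches, power) signature are stand-ins based on the description above, not our actual
source code:

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

@TeleOp
public class IncrementalDriveSketch extends LinearOpMode {
    private Chassis chassis;   // hypothetical wrapper around our drive motors

    @Override
    public void runOpMode() {
        chassis = new Chassis(hardwareMap);   // assumed constructor
        waitForStart();
        while (opModeIsActive()) {
            driveWhileTilted();   // other joystick-based functions would be called here as well
        }
    }

    // Drives one inch, then calls itself again as long as the joystick stays tilted.
    private void driveWhileTilted() {
        double tilt = -gamepad1.left_stick_y;   // forward tilt percentage, -1.0 to 1.0
        if (!opModeIsActive() || Math.abs(tilt) < 0.05) {
            return;   // joystick released (or OpMode ended), so stop recursing
        }
        chassis.driveStraight(Math.signum(tilt) * 1.0, Math.abs(tilt));   // one inch per call
        driveWhileTilted();
    }
}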
Currently, the driveStraight method pauses the robot at the end of its movement using the stop
function (which sets all motor powers to 0). However, we have had to refactor this function so that it
only stops when the robot is truly done moving, rather than after every inch, preventing “jerky” motion.
We accomplished this by passing a shouldStop parameter into the driveStraight function, which is only
true once the robot has traveled far enough.
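A rough illustration of that refactor follows; the helper methods inside it are placeholders for our
encoder-following logic, and only the shouldStop behavior is the point of the sketch:

// Inside our hypothetical Chassis class; signature assumed from the description above.
public void driveStraight(double inches, double power, boolean shouldStop) {
    setEncoderTargetInches(inches);   // placeholder: convert inches to ticks and set targets
    setAllMotorPowers(power);         // placeholder: apply power to every drive motor
    while (motorsAreBusy()) {
        // wait for this one-inch segment to complete
    }
    if (shouldStop) {
        stop();   // sets all motor powers to 0, but only once the robot is truly done moving
    }
    // otherwise leave the motors powered so the next one-inch call blends in without a jerk
}

Each intermediate call passes shouldStop = false, and only the final call (when the joystick is released
or the requested distance is complete) passes true.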
Project Organization
Abstraction, or moving lower-level code into specific handlers, is a major feature of our project
structure. Our OpMode calls functions on our Robot model, which in turn calls the proper subpart. This
abstraction keeps the OpMode from holding references to any parts of the robot itself. It also means
that, when we have to change the actual behavior of a function (for example, changing the distance
parameter of movement functions from inches to feet), we can change just the lowest-level function
and then alter a slightly higher-level one to still accept inches and convert. We also abstracted
telemetry functionality into the Utils.OutputUtils.print function. This function avoids the possible error
of forgetting to call telemetry.update(), though it does add some overhead.
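A condensed sketch of this layering is shown below. Apart from Robot, Chassis, and
Utils.OutputUtils.print, the names and signatures here are illustrative assumptions rather than our
actual code:

// OpMode layer: talks only to the Robot model, never to individual motors.
robot.driveForward(12.0);   // distance in inches

// Robot layer: forwards the call to the proper subpart. If Chassis.driveStraight
// were later changed to take feet instead of inches, only this method would
// convert, and every OpMode could keep passing inches.
public void driveForward(double inches) {
    chassis.driveStraight(inches / 12.0, DEFAULT_POWER, true);   // after an inches-to-feet change
}

// Telemetry helper (inside Utils.OutputUtils): callers cannot forget telemetry.update().
public static void print(Telemetry telemetry, String caption, Object value) {
    telemetry.addData(caption, value);
    telemetry.update();   // always flushed, at the cost of a little extra overhead
}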
Pixel Detection
When we began to code our robot, we considered using a TensorFlow neural network for object
(pixel) recognition. However, we decided against using AI to detect pixels for several reasons. Cameras
on robots are not only expensive but also incur significant processing overhead, and an RGB camera
would draw a large amount of power. A camera can also be damaged or knocked off-kilter easily, which
would result in improper location detection for the pixels. The nature of Centerstage does not require a
robot to detect the vast majority of pixels in random locations - instead, the robot can find them in
predetermined spots around the field. An object detection neural network might also struggle to
estimate proper bounding boxes for objects, especially against the novel backgrounds encountered at
competitions.